DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries
Abstract
This paper presents DEPEVAL(summ), a dependency-based metric for automatic evaluation of summaries. Using a reranking parser and a Lexical-Functional Grammar (LFG) annotation, we produce a set of dependency triples for each summary. The dependency set for each candidate summary is then automatically compared against the dependencies generated from the model summaries. We examine a number of variations of the method, including the use of WordNet, partial matching, and the removal of relation labels from the dependencies. In a test on TAC 2008 and DUC 2007 data, DEPEVAL(summ) achieves comparable or higher correlations with human judgments than the popular evaluation metrics ROUGE and Basic Elements (BE).
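To make the comparison step concrete, below is a minimal Python sketch of scoring a candidate summary against model summaries by overlap of (head, relation, dependent) triples. The function names, the multiset F-score, and the averaging over model summaries are illustrative assumptions rather than the paper's exact formulation, and a real system would obtain the triples from the LFG-based parsing pipeline rather than hand-written lists.

```python
from collections import Counter

def triple_overlap_f1(candidate_triples, model_triples, ignore_relation=False):
    """F-score between a candidate's dependency triples and one model's triples.

    Each triple is (head, relation, dependent). Setting ignore_relation=True
    mimics the 'unlabeled' variant that drops the relation label.
    """
    if ignore_relation:
        candidate_triples = [(h, d) for h, _, d in candidate_triples]
        model_triples = [(h, d) for h, _, d in model_triples]

    cand, model = Counter(candidate_triples), Counter(model_triples)
    overlap = sum((cand & model).values())          # multiset intersection
    if not candidate_triples or not model_triples or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(model.values())
    return 2 * precision * recall / (precision + recall)

def depeval_like_score(candidate_triples, model_summaries):
    """Average the per-model F-scores over all model (reference) summaries."""
    scores = [triple_overlap_f1(candidate_triples, m) for m in model_summaries]
    return sum(scores) / len(scores) if scores else 0.0

# Toy usage with hand-written triples.
cand = [("saw", "SUBJ", "john"), ("saw", "OBJ", "mary")]
models = [[("saw", "SUBJ", "john"), ("saw", "OBJ", "mary"), ("saw", "ADJUNCT", "yesterday")]]
print(depeval_like_score(cand, models))
```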
Similar Papers
On alternative automated content evaluation measures
In this draft we describe our TAC submissions and post-TAC experiments for the Automated Evaluation of Summaries of Peers task of the Text Analysis Conference (TAC). We approached the problem in two different ways. First, we use a generative modeling based approach to capture the sentence-level presence of keywords in peer summaries and provide two fairly simple alternatives to identify keyw...
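As a rough illustration of "sentence-level presence of keywords" only (the snippet above is truncated, so its generative model and keyword-identification step are not reproduced here), a hedged bag-of-words sketch might look like this; sentence_keyword_coverage is a hypothetical name, not the authors' method.

```python
import re

def sentence_keyword_coverage(peer_summary, keywords):
    """Toy content score: for each sentence, the fraction of reference
    keywords it contains; the summary score is the mean over sentences."""
    if not keywords:
        return 0.0
    sentences = [s for s in re.split(r"[.!?]+\s*", peer_summary.lower()) if s]
    keywords = {k.lower() for k in keywords}
    per_sentence = [
        len(keywords & set(re.findall(r"\w+", s))) / len(keywords)
        for s in sentences
    ]
    return sum(per_sentence) / len(per_sentence) if per_sentence else 0.0

print(sentence_keyword_coverage(
    "The storm hit the coast. Thousands were evacuated.",
    ["storm", "evacuated", "coast"],
))
```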
Text Summarization based on Hanning Window and Dependency Structure Analysis
This paper describes the summarization methods used by the team FLAB (gid040) for the text summarization tasks in the NTCIR-2 workshop. The focus is on the effectiveness of an extrinsic evaluation based on relevance assessment in information retrieval, with reference to the evaluation results obtained for task B. The team FLAB submitted two types of summaries for the task: baseline summaries and t...
An Intelligent and Semantics-Aware System for Evaluating Text Summarization Systems
Nowadays, summarizers and machine translators attract a great deal of attention, and much work on building such tools has been carried out around the world. For Farsi, as for other languages, there have been efforts in this field, so evaluating such tools is of great importance. Human evaluations of machine summarization are extensive but expensive. Human evaluations can take months to f...
A complex network approach to text summarization
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network repres...
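A minimal sketch of the general idea follows, assuming a sentence graph built from shared vocabulary and degree as the single, illustrative network metric; the snippet above is cut off before the graph construction details, so everything here, including extract_summary, is a hypothetical stand-in rather than the paper's algorithm.

```python
import re
from itertools import combinations

def extract_summary(text, num_sentences=2):
    """Toy extractive summarizer: build a sentence graph where two sentences
    are linked if they share at least one word, then keep the sentences
    with the highest degree (number of neighbors)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]

    degree = [0] * len(sentences)
    for i, j in combinations(range(len(sentences)), 2):
        if words[i] & words[j]:                 # shared vocabulary -> edge
            degree[i] += 1
            degree[j] += 1

    top = sorted(range(len(sentences)), key=lambda i: degree[i], reverse=True)
    chosen = sorted(top[:num_sentences])        # keep original sentence order
    return " ".join(sentences[i] for i in chosen)

print(extract_summary(
    "The river flooded the town. Rescue teams reached the town by boat. "
    "Officials praised the rescue teams. The weather was sunny elsewhere."
))
```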
An Enhanced Method For Evaluating Automatic Video Summaries
Evaluation of automatic video summaries is a challenging problem. In past years, some evaluation methods have been presented that utilize only a single feature, such as color, to detect similarity between automatic video summaries and ground-truth user summaries. One drawback of using a single feature is that it sometimes gives a false similarity detection, which makes the assessment of...
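For intuition about the color-only matching that this snippet criticizes, here is a hedged sketch comparing normalized RGB histograms with histogram intersection; color_histogram and histogram_similarity are illustrative names, and this is the single-feature baseline, not the enhanced method the paper proposes.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Per-channel color histogram of an RGB frame (H x W x 3 uint8 array),
    normalized so frames of different sizes are comparable."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def histogram_similarity(frame_a, frame_b):
    """Histogram-intersection similarity in [0, 1]; two visually different
    frames with similar color distributions can still score high, which is
    exactly the false-positive risk of relying on color alone."""
    return float(np.minimum(color_histogram(frame_a), color_histogram(frame_b)).sum())

# Toy usage with random 'frames'; real keyframes would come from the videos.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
b = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
print(histogram_similarity(a, b))
```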